
    Contrastive Multimodal Learning for Emergence of Graphical Sensory-Motor Communication

    In this paper, we investigate whether artificial agents can develop a shared language in an ecological setting where communication relies on a sensory-motor channel. To this end, we introduce the Graphical Referential Game (GREG), in which a speaker must produce a graphical utterance to name a visual referent object while a listener has to select the corresponding object among distractor referents, given the delivered message. The utterances are drawing images produced using dynamical motor primitives combined with a sketching library. To tackle GREG we present CURVES: a multimodal contrastive deep learning mechanism that represents the energy (alignment) between named referents and utterances, which are generated through gradient ascent on the learned energy landscape. We demonstrate that CURVES not only succeeds at solving GREG but also enables agents to self-organize a language that generalizes to feature compositions never seen during training. In addition to evaluating the communication performance of our approach, we explore the structure of the emerging language. Specifically, we show that the resulting language forms a coherent lexicon shared between agents and that basic compositional rules on the graphical productions could not explain the observed compositional generalization.
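
    As a rough illustration of the mechanism this abstract describes, the sketch below pairs an energy-based contrastive objective with gradient-ascent utterance generation. It is a minimal sketch under stated assumptions, not the authors' implementation: every name here (EnergyModel, contrastive_loss, speak, render) is hypothetical, and the differentiable renderer mapping motor primitives to drawings is only stubbed.

    # Hypothetical sketch of an energy-based contrastive speaker, loosely
    # following the abstract's description; not the authors' code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EnergyModel(nn.Module):
        """Scores the alignment (negative energy) between referents and utterances."""
        def __init__(self, ref_encoder: nn.Module, utt_encoder: nn.Module):
            super().__init__()
            self.ref_encoder = ref_encoder   # encodes referent images
            self.utt_encoder = utt_encoder   # encodes drawn utterance images

        def forward(self, referents, utterances):
            r = F.normalize(self.ref_encoder(referents), dim=-1)
            u = F.normalize(self.utt_encoder(utterances), dim=-1)
            return r @ u.T                   # pairwise alignment scores

    def contrastive_loss(model, referents, utterances):
        """InfoNCE-style loss: matching (referent, utterance) pairs are diagonal."""
        scores = model(referents, utterances)
        targets = torch.arange(scores.size(0), device=scores.device)
        return F.cross_entropy(scores, targets)

    def speak(model, referent, motor_params, render, steps=50, lr=0.1):
        """Generate an utterance by gradient ascent on the alignment score.
        `render` is assumed to be a differentiable map from motor-primitive
        parameters to a drawing image (e.g. a differentiable sketching library)."""
        motor_params = motor_params.clone().requires_grad_(True)
        opt = torch.optim.Adam([motor_params], lr=lr)
        for _ in range(steps):
            utterance = render(motor_params)
            score = model(referent.unsqueeze(0), utterance.unsqueeze(0)).squeeze()
            (-score).backward()              # ascend alignment = descend its negative
            opt.step()
            opt.zero_grad()
        return render(motor_params).detach()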

    Language-biased image classification: evaluation based on semantic representations

    Humans show language-biased image recognition for word-embedded images, a phenomenon known as picture-word interference. This interference depends on hierarchical semantic categories and reflects that human language processing interacts strongly with visual processing. Similar to humans, recent artificial models jointly trained on texts and images, e.g., OpenAI CLIP, show language-biased image classification. Exploring whether this bias leads to interference similar to that observed in humans can contribute to understanding how much the model acquires hierarchical semantic representations from joint learning of language and vision. The present study introduces methodological tools from the cognitive science literature to assess the biases of artificial models. Specifically, we introduce a benchmark task to test whether words superimposed on images can distort image classification across different category levels and, if they can, whether the perturbation is due to a shared semantic representation between language and vision. Our dataset is a set of word-embedded images combining natural image datasets with hierarchical word labels at the superordinate/basic category levels. Using this benchmark, we evaluate the CLIP model. We show that presenting words distorts the model's image classification across different category levels, but the effect does not depend on the semantic relationship between images and embedded words. This suggests that the semantic word representation in CLIP's visual processing is not shared with the image representation, although the word representation strongly dominates for word-embedded images.
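
    The benchmark idea lends itself to a short sketch: overlay a word on an image and compare CLIP's zero-shot classification with and without the overlay. The sketch below uses OpenAI's public CLIP package; the label set, text placement, and image path are illustrative placeholders, not the paper's dataset or stimuli.

    # Hypothetical probe of word-embedded image classification with CLIP;
    # labels and image path are placeholders, not the paper's benchmark.
    import clip
    import torch
    from PIL import Image, ImageDraw

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    def embed_word(image: Image.Image, word: str) -> Image.Image:
        """Superimpose a word on an image (a crude stand-in for the stimuli)."""
        img = image.copy()
        ImageDraw.Draw(img).text((10, 10), word, fill="white")
        return img

    def classify(image: Image.Image, labels: list) -> list:
        """Zero-shot classification probabilities over candidate labels."""
        image_input = preprocess(image).unsqueeze(0).to(device)
        text_input = clip.tokenize([f"a photo of a {l}" for l in labels]).to(device)
        with torch.no_grad():
            logits_per_image, _ = model(image_input, text_input)
        return logits_per_image.softmax(dim=-1).squeeze(0).tolist()

    # Does an embedded word shift basic-level classification?
    basic_labels = ["dog", "cat", "car", "chair"]   # basic category level
    image = Image.open("stimulus.jpg")              # placeholder image
    probs_clean = classify(image, basic_labels)
    probs_word = classify(embed_word(image, "cat"), basic_labels)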

    Watching artificial intelligence through the lens of cognitive science methodologies

    See also https://developmentalsystems.org/watch_ai_through_cogsci. The use of large language models has contributed to recent advances in the machine learning literature. However, interpreting what representations these models acquire is challenging due to their computational complexity. How can we understand the models functionally? This blog post suggests importing cognitive science methodologies to interpret their functional mechanisms. We introduce a way to conduct cognitive science experiments on machine learning models and to analyze the models' internal representations using cognitive neuroscience methodologies.
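
    To make the proposal concrete, here is one cognitive-neuroscience technique that transfers directly: representational similarity analysis (RSA), which compares two representations by rank-correlating their pairwise dissimilarity structures over a shared stimulus set. The arrays below are random placeholders; in practice they would come from hooks on a real model's layers and from behavioral or neural data.

    # Illustrative RSA between a model layer and a reference representation;
    # the data here are placeholders, not an experiment from the post.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(activations: np.ndarray) -> np.ndarray:
        """Representational dissimilarity matrix (condensed form): pairwise
        distances between stimulus representations (rows = stimuli)."""
        return pdist(activations, metric="correlation")

    def rsa(acts_a: np.ndarray, acts_b: np.ndarray) -> float:
        """Rank-correlate two RDMs computed over the same stimulus set."""
        rho, _ = spearmanr(rdm(acts_a), rdm(acts_b))
        return rho

    # Placeholder data: 20 stimuli, two representations of different width.
    layer_acts = np.random.randn(20, 512)    # e.g. hidden-layer activations
    human_data = np.random.randn(20, 8)      # e.g. behavioral similarity features
    print(f"RSA score: {rsa(layer_acts, human_data):.3f}")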

    Emergence of Shared Sensory-motor Graphical Language from Visual Input

    The framework of Language Games studies the emergence of languages in populations of agents. Recent contributions relying on deep learning methods have focused on agents communicating via an idealized channel, where utterances produced by a speaker are directly perceived by a listener. This contrasts with human communication, which instead relies on a sensory-motor channel, where motor commands produced by the speaker (e.g. vocal or gestural articulators) result in sensory effects perceived by the listener (e.g. audio or visual). Here, we investigate whether agents can evolve a shared language when they are equipped with a continuous sensory-motor system to produce and perceive signs, e.g. drawings. To this end, we introduce the Graphical Referential Game (GREG), in which a speaker must produce a graphical utterance to name a visual referent object consisting of combinations of MNIST digits, while a listener has to select the corresponding object among distractor referents, given the produced message. The utterances are drawing images produced using dynamical motor primitives combined with a sketching library. To tackle GREG we present CURVES: a multimodal contrastive deep learning mechanism that represents the energy (alignment) between named referents and utterances, which are generated through gradient ascent on the learned energy landscape. We then present a set of experiments and metrics based on a systematic, compositional dataset to evaluate the resulting language. We show that our method allows the emergence of a shared graphical language with compositional properties.
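
    Reading the abstract procedurally, one round of GREG could look like the sketch below, reusing the hypothetical speak and EnergyModel components from the first sketch above; this is an assumed game loop, not the authors' protocol.

    # Hypothetical single round of the Graphical Referential Game.
    import torch

    def play_round(speaker_energy, listener_energy, referents, target_idx,
                   init_motor_params, render):
        """Return True if the listener identifies the target among distractors."""
        target = referents[target_idx]
        # Speaker: optimize motor parameters so the drawing aligns with the target.
        utterance = speak(speaker_energy, target, init_motor_params, render)
        # Listener: score every candidate referent against the received drawing.
        scores = listener_energy(referents, utterance.unsqueeze(0)).squeeze(-1)
        return int(torch.argmax(scores)) == target_idx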